Collective Knowledge (software)
| Developer(s) | Grigori Fursin and the cTuning foundation |
| --- | --- |
| Initial release | 2015 |
| Stable release | 2.6.3 / November 30, 2022 (discontinued in favor of the new Collective Mind framework[1]) |
| Written in | Python |
| Operating system | Linux, Mac OS X, Microsoft Windows, Android |
| Type | Knowledge management, FAIR data, MLOps, data management, artifact evaluation, package management, scientific workflow system, DevOps, continuous integration, reproducibility |
| License | Apache License 2.0 (for version 2.0); BSD 3-clause License (for version 1.0) |
| Website | GitHub |
The Collective Knowledge (CK) project is an open-source framework and repository to enable collaborative, reproducible and sustainable research and development of complex computational systems.[2] CK is a small, portable, customizable and decentralized infrastructure helping researchers and practitioners:
- share their code, data and models as reusable Python components and automation actions[3] with a unified JSON API, JSON meta information, and a unique ID (UID), following the FAIR principles[2]
- assemble portable workflows from shared components (such as multi-objective autotuning and Design space exploration[4])
- automate, crowdsource and reproduce benchmarking of complex computational systems[5]
- unify access to predictive analytics tools (scikit-learn, R, deep neural networks)
- enable reproducible and interactive papers[6]
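The unified JSON API convention above can be illustrated with a minimal, self-contained sketch: every automation action takes a dictionary and returns a dictionary whose "return" key is 0 on success. The registry, UID, and action below are hypothetical illustrations and do not use the actual ck package.

```python
# Sketch of a CK-style automation action: a Python function exposed
# through a unified JSON-compatible API (dict in, dict out).
# The component registry, UID and action names are hypothetical.
import json

# Each shared component carries JSON meta information and a UID.
registry = {
    "8289e0cf24346aa7": {
        "name": "gcc",
        "tags": ["compiler", "lang-c"],
    }
}

def access(request):
    """CK-style entry point: dispatch an action described by a dict."""
    action = request.get("action")
    if action == "search":
        tag = request.get("tag")
        hits = [uid for uid, meta in registry.items() if tag in meta["tags"]]
        return {"return": 0, "uids": hits}  # "return": 0 means success
    return {"return": 1, "error": "unknown action: %s" % action}

print(json.dumps(access({"action": "search", "tag": "compiler"})))
```

Because every action shares the same dict-in/dict-out shape, components can be chained into workflows without per-tool glue code.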
Notable usages
- ARM uses CK to accelerate computer engineering[7]
- Several ACM-sponsored conferences use CK to automate the Artifact Evaluation process[8][9]
- Imperial College London uses CK to automate and crowdsource compiler bug detection[10]
- Researchers from the University of Cambridge used CK to help the community reproduce results of their publication in the International Symposium on Code Generation and Optimization (CGO'17) during Artifact Evaluation[11]
- General Motors (USA) uses CK to crowd-benchmark convolutional neural network optimizations[12][13]
- The Raspberry Pi Foundation and the cTuning foundation released a CK workflow with a reproducible "live" paper to enable collaborative research into multi-objective autotuning and machine learning techniques[4]
- IBM uses CK to reproduce quantum computing results published in Nature[14]
- CK is used to automate MLPerf benchmarks[15][16]
Portable package manager for portable workflows
CK includes an integrated cross-platform package manager, built from Python scripts with a JSON API and JSON meta-descriptions, which automatically rebuilds the software environment required to run a given research workflow on a user's machine.[17]
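The idea of driving environment rebuilding from JSON meta-descriptions can be sketched as follows. The meta-description format, tag names, and resolver below are hypothetical illustrations in the spirit of CK's package manager, not the real CK implementation.

```python
# Sketch: a JSON meta-description lists a package's dependencies by tags;
# a resolver checks which are already detected on the machine and which
# still need to be installed. All names here are hypothetical.
package_meta = {
    "soft_uoa": "lib.tflite",
    "deps": {
        "compiler": {"tags": "compiler,lang-c"},
        "cmake":    {"tags": "tool,cmake"},
    },
}

# Pretend results of a prior software-detection pass on this machine.
installed = {"compiler,lang-c": "/usr/bin/gcc"}

def resolve(meta, installed):
    """Resolve each dependency to a local path, or report it as missing."""
    resolved, missing = {}, []
    for name, dep in meta["deps"].items():
        path = installed.get(dep["tags"])
        if path:
            resolved[name] = path
        else:
            missing.append(name)  # would trigger an install step
    return {"return": 0 if not missing else 1,
            "resolved": resolved, "missing": missing}

print(resolve(package_meta, installed))
```

Keeping dependencies as declarative JSON rather than imperative scripts is what lets the same workflow be rebuilt across Linux, Windows, macOS and Android hosts.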
Reproducibility of experiments
CK enables reproducibility of experimental results through community involvement, similar to collaborative curation in Wikipedia and result replication in physics. Whenever a new workflow is shared with all its components via GitHub, anyone can try it on a different machine, with a different environment, and with slightly different choices (compilers, libraries, data sets). Whenever unexpected or wrong behavior is encountered, the community explains it, fixes the components, and shares them back, as described in [4].
References
- ^ CK package at PyPI
- ^ a b Fursin, Grigori (29 March 2021). Collective Knowledge: organizing research projects as a database of reusable components and portable workflows with common APIs. Philosophical Transactions of the Royal Society. arXiv:2011.01149. doi:10.1098/rsta.2020.0211.
- ^ Reusable CK components and actions to automate common research tasks
- ^ a b c Live paper with reproducible experiments to enable collaborative research into multi-objective autotuning and machine learning techniques
- ^ Online repository with reproduced results
- ^ Index of reproduced papers
- ^ Ed Plowman; Grigori Fursin, ARM TechCon'16 presentation "Know Your Workloads: Design more efficient systems!"
- ^ Artifact Evaluation for systems and machine learning conferences
- ^ ACM TechTalk about reproducing 150 research papers and testing them in the real world
- ^ EU TETRACOM project to combine CK and CLSmith (PDF), archived from the original (PDF) on 2017-03-05, retrieved 2016-09-15
- ^ Artifact Evaluation Reproduction for "Software Prefetching for Indirect Memory Accesses", CGO 2017, using CK, 16 October 2022
- ^ GitHub development website for CK-powered Caffe, 11 October 2022
- ^ Open-source Android application to let the community participate in collaborative benchmarking and optimization of various DNN libraries and models
- ^ Reproducing quantum results from nature – how hard could it be?
- ^ MLPerf crowd-benchmarking
- ^ MLPerf inference benchmark automation guide, 17 October 2022
- ^ List of shared CK packages